AI Performance


NVIDIA Ups The Ante In Edge Computing With Jetson Orin Nano Developer Kit

#artificialintelligence

NVIDIA continues to push the envelope of AI acceleration, both in the data center and at the edge. Last month, it announced the availability of the Jetson Orin Nano Developer Kit, the latest addition to its Jetson family of devices. Initially announced in September 2022, the Jetson Orin Nano system-on-module (SoM) delivers 80x the performance of the previous-generation Jetson Nano. The developer kit makes that SoM accessible to developers. NVIDIA also published a benchmark comparing the performance of AI vision models on the Jetson Nano and the Jetson Orin Nano.


Aetina and Hailo will launch Multi-Inference AI Solutions at the Edge - Coleda Pvt Ltd

#artificialintelligence

Together, Aetina and Hailo are launching multi-inference AI solutions that combine four Hailo-8™ AI accelerators on Aetina's AI-MXM-H84A MXM module, Aetina's AIP-SQ67 AI inference platform, and object-recognition AI models. The AIP-SQ67, powered by the AI-MXM-H84A module, offers enough processing power for real-time video analytics and numerous low-latency AI inference tasks at the edge, with up to 104 tera-operations per second (TOPS) of AI performance from Hailo's AI processors. Because the technology can identify diverse objects, such as people and vehicles, and evaluate large video datasets from multiple cameras simultaneously, it suits a variety of applications in cities and transportation networks. Aetina and Hailo will present the solutions at ISC West 2023. The AIP-SQ67, a member of Aetina's MegaEdge family, pairs an Intel 12th Gen Core™ processor with expansion slots for up to two M.2 AI accelerators and one MXM module.
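
The quoted 104 TOPS is consistent with Hailo's published per-chip rating: each Hailo-8 is specified at 26 TOPS, and the module carries four of them. A one-line sanity check (the per-chip figure comes from Hailo's product specifications, not from this article):

```python
# Aggregate compute of the AI-MXM-H84A module: four Hailo-8 chips,
# each rated at 26 TOPS per Hailo's product specifications.
tops_per_hailo8 = 26
num_accelerators = 4
total_tops = num_accelerators * tops_per_hailo8
print(total_tops)  # 104, matching the quoted "up to 104 TOPS"
```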


Using edge AI processors to boost embedded AI performance

#artificialintelligence

The arrival of artificial intelligence (AI) in embedded computing has led to a proliferation of potential solutions that aim to deliver the high performance required for neural-network inferencing on streaming video at high frame rates. Many benchmarks, such as the ImageNet challenge, work at comparatively low resolutions and can therefore be handled by many embedded-AI solutions, but real-world applications in retail, medicine, security, and industrial control call for the ability to handle video frames and images at resolutions up to 4Kp60 and beyond. Scalability is vital, and it is not always an option with system-on-chip (SoC) platforms that provide a fixed combination of host processor and neural accelerator. Though such all-in-one implementations often provide a means of evaluating different forms of neural network during prototyping, they lack the granularity and scalability that real-world systems often need. Industrial-grade AI applications therefore benefit from a more balanced architecture, in which heterogeneous processors (e.g., CPUs, GPUs) and accelerators cooperate in an integrated pipeline: they not only perform inferencing on raw video frames but also apply pre- and post-processing to improve overall results, and handle format conversion to support multiple cameras and sensor types.
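
The pipeline structure described above can be sketched as independent stages, so each stage can in principle be mapped to a different processor. This is a minimal illustration only; the stage functions below are hypothetical stand-ins for real preprocessing, accelerator, and post-processing code, not any vendor's API:

```python
# Sketch of a staged video-analytics pipeline: preprocessing (e.g. format
# conversion / normalization on a CPU or GPU), inference (on a neural
# accelerator), and post-processing (e.g. thresholding detections).
# All stages here are hypothetical stand-ins operating on a plain dict.
from typing import Any, Callable, Dict, List

Frame = Dict[str, Any]

class Pipeline:
    def __init__(self) -> None:
        self.stages: List[Callable[[Frame], Frame]] = []

    def add_stage(self, fn: Callable[[Frame], Frame]) -> "Pipeline":
        self.stages.append(fn)
        return self

    def run(self, frame: Frame) -> Frame:
        # Pass the frame through each stage in order.
        for stage in self.stages:
            frame = stage(frame)
        return frame

def preprocess(frame: Frame) -> Frame:
    # Normalize 8-bit pixel values to [0, 1] (stand-in for format conversion).
    frame["normalized"] = [p / 255.0 for p in frame["pixels"]]
    return frame

def infer(frame: Frame) -> Frame:
    # Stand-in for the accelerator call; here it just rounds values.
    frame["scores"] = [round(v, 2) for v in frame["normalized"]]
    return frame

def postprocess(frame: Frame) -> Frame:
    # Keep only "detections" above a confidence threshold.
    frame["detections"] = [s for s in frame["scores"] if s > 0.5]
    return frame

pipeline = Pipeline().add_stage(preprocess).add_stage(infer).add_stage(postprocess)
result = pipeline.run({"pixels": [0, 64, 192, 255]})
print(result["detections"])  # [0.75, 1.0]
```

Because each stage is just a callable, swapping the inference stand-in for a call into an accelerator runtime, or adding a format-conversion stage per camera type, does not disturb the rest of the pipeline.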


Intel Collaboration With Deci Boosts AI Performance on Intel Hardware

#artificialintelligence

Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness of Intel's leading-edge research activities, such as AI, neuromorphic computing, and quantum computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. He has over 23 years of experience in the computing industry bringing new products and technology to market. During his 15 years at Intel, he has worked in roles spanning R&D, architecture, strategic planning, product marketing, and technology evangelism. In addition to his work at Intel, he has a passion for audio technology and is an active father of five children.


Nvidia takes the wraps off Hopper, its latest GPU architecture

#artificialintelligence

After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture, a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. The first card in the Hopper lineup is the H100, which contains 80 billion transistors and a component called the Transformer Engine designed to speed up specific categories of AI models. Another architectural highlight is Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs.


IBM Launches Research Collaboration Center to Drive Next-Generation AI Hardware

#artificialintelligence

Artificial intelligence has the potential to solve some of science and industry's most vexing challenges. But for that to happen, it needs a new generation of computer systems. Today, AI's ever-increasing sophistication is pushing the boundaries of the industry's existing hardware systems as users find more ways to incorporate data from the edge, the Internet of Things, and other sources. In the continued pursuit of more advanced hardware for the AI era, IBM is working across its Systems, Research, and Watson divisions to take a fresh approach to AI, one that requires significant changes in the fundamentals of systems and computing design. To help achieve AI's true potential, IBM, with support from New York State (NYS), SUNY Polytechnic Institute, and the founding partnership members, today announced an ambitious plan to create a global research hub to develop next-generation AI hardware and expand their joint research efforts in nanotechnology.